10 research outputs found

    Digital, analog, and memristive implementation of Spike-based Synaptic Plasticity

    Synaptic plasticity is believed to play an essential role in learning and memory in the brain. To date, many plasticity algorithms have been devised, some of which have been confirmed in electrophysiological experiments. Perhaps the most popular synaptic plasticity rule, or learning algorithm, among neuromorphic engineers is Spike Timing Dependent Plasticity (STDP). The conventional form of STDP has been implemented in various forms by many groups using different hardware approaches, and has been used for applications such as pattern classification. However, a newer form of STDP, which elicits synaptic efficacy modification based on the timing among a triplet of pre- and post-synaptic spikes, has not been well explored in hardware
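    As a point of reference for the pair-based rule discussed above, conventional STDP can be sketched in a few lines: the weight change decays exponentially with the pre/post timing difference. The amplitudes and time constants below are illustrative assumptions, not values from any particular hardware implementation.

```python
import math

# Pair-based STDP: the weight change depends only on the time difference
# between one presynaptic and one postsynaptic spike.
# All parameters are illustrative, not taken from any cited circuit.
A_PLUS = 0.01     # potentiation amplitude
A_MINUS = 0.012   # depression amplitude
TAU_PLUS = 20.0   # potentiation time constant (ms)
TAU_MINUS = 20.0  # depression time constant (ms)

def pair_stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:   # pre before post -> potentiation (LTP)
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    else:         # post before pre -> depression (LTD)
        return -A_MINUS * math.exp(dt / TAU_MINUS)
```

    A pre-before-post pair yields potentiation and the reverse order yields depression; triplet rules extend this by additionally conditioning on a second postsynaptic spike.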

    Modeling triplet spike-timing-dependent plasticity using memristive devices

    Triplet-based spike-timing-dependent plasticity (TSTDP) is an advanced synaptic plasticity rule that results in improved learning capability compared to the conventional pair-based STDP (PSTDP). The TSTDP rule can reproduce the results of many electrophysiological experiments, where the PSTDP fails. This paper proposes a novel memristive circuit that implements the TSTDP rule. The proposed circuit is designed using three voltage (flux)-driven memristors. Simulation results demonstrate that our memristive circuit induces synaptic weight changes that arise due to the timing differences among pairs and triplets of spikes. The presented memristive design is an initial step toward developing asynchronous TSTDP learning architectures using memristive devices. These architectures may facilitate the implementation of advanced large-scale neuromorphic systems with applications in real-world engineering tasks such as pattern classification
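    A minimal software sketch of such a triplet rule (in the spirit of the Pfister-Gerstner model that TSTDP designs typically target) shows the key mechanism: potentiation at a postsynaptic spike is gated by a slow trace of earlier postsynaptic activity, so higher pairing frequencies produce more LTP. All constants are illustrative assumptions.

```python
import math

# Minimal triplet STDP: LTD uses a pre/post pair term, while LTP uses a
# triplet term gated by a slow postsynaptic trace o2. Constants are
# illustrative, not fitted values from any experiment.
A2_MINUS, A3_PLUS = 0.007, 0.006
TAU_PLUS, TAU_MINUS, TAU_Y = 16.8, 33.7, 114.0  # trace time constants (ms)

def triplet_stdp(pre_times, post_times):
    """Total weight change for lists of spike times (ms)."""
    events = sorted([(t, 'pre') for t in pre_times] +
                    [(t, 'post') for t in post_times])
    r1 = o1 = o2 = 0.0   # pre trace, fast post trace, slow post trace
    t_last, dw = 0.0, 0.0
    for t, kind in events:
        gap = t - t_last
        r1 *= math.exp(-gap / TAU_PLUS)
        o1 *= math.exp(-gap / TAU_MINUS)
        o2 *= math.exp(-gap / TAU_Y)
        if kind == 'pre':
            dw -= o1 * A2_MINUS        # depression: pre after post
            r1 += 1.0
        else:
            dw += r1 * A3_PLUS * o2    # potentiation gated by slow trace
            o1 += 1.0
            o2 += 1.0
        t_last = t
    return dw
```

    Note that a lone pre-post pair produces no LTP here (o2 is still zero at the first post spike); only a pre spike followed by two post spikes potentiates, which is what distinguishes the triplet rule from PSTDP.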

    Modeling and simulating in-memory memristive deep learning systems: an overview of current efforts

    Deep Learning (DL) systems have demonstrated unparalleled performance in many challenging engineering applications. As the complexity of these systems inevitably increases, they require greater processing capability and consume more power, neither of which is readily available in resource-constrained processors such as Internet of Things (IoT) edge devices. Memristive In-Memory Computing (IMC) systems for DL, termed Memristive Deep Learning Systems (MDLSs), perform the computation and storage of repetitive operations in the same physical location using emerging memory devices, and can be used to augment the performance of traditional DL architectures, massively reducing their power consumption and latency. However, memristive devices, such as Resistive Random-Access Memory (RRAM) and Phase-Change Memory (PCM), are difficult and cost-prohibitive to fabricate in small quantities, and are prone to various device non-idealities that must be accounted for. Consequently, the popularity of simulation frameworks, used to simulate MDLSs prior to circuit-level realization, is burgeoning. In this paper, we provide a survey of existing simulation frameworks and related tools used to model large-scale MDLSs. Moreover, we perform direct performance comparisons of modernized open-source simulation frameworks, and provide insights into future modeling and simulation strategies and approaches. We hope that this treatise is beneficial to the computer and electrical engineering community, and helps readers better understand the available tools and techniques for MDLS development
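    The core IMC primitive such frameworks simulate is the analog crossbar matrix-vector product, with device non-idealities perturbing the stored conductances. The sketch below assumes a simple lognormal conductance variation; it is an illustration, not any particular framework's device model.

```python
import numpy as np

# A crossbar computes y = G^T v in one step via Ohm's and Kirchhoff's
# laws: each column current is the dot product of the input voltages
# with that column's conductances. sigma adds illustrative
# device-to-device lognormal variation (sigma=0 -> ideal).
rng = np.random.default_rng(0)

def crossbar_matvec(weights, v_in, sigma=0.0):
    """Map ideal weights to conductances (with optional variation)
    and return the resulting column currents."""
    g = weights * rng.lognormal(mean=0.0, sigma=sigma, size=weights.shape)
    return g.T @ v_in

W = np.array([[1.0, 2.0], [3.0, 4.0]])
v = np.array([1.0, 1.0])
ideal = crossbar_matvec(W, v)             # exact matrix-vector product
noisy = crossbar_matvec(W, v, sigma=0.1)  # perturbed by device variation
```

    Endurance, retention, and quantization effects would enter the same way: as transformations of the conductance matrix before the multiply.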

    Physical implementation of pair-based spike-timing-dependent plasticity

    Objective: Spike-timing-dependent plasticity (STDP) is one of several plasticity rules that lead to learning and memory in the brain. STDP induces synaptic weight changes based on the relative spike timing of the pre- and postsynaptic neurons. A neural network that can mimic the adaptive capability of biological brains in the temporal domain requires the weight of each connection to be altered by spike timing. Physically realising such a network in silicon requires a large number of interconnected STDP circuits on the same substrate, which imposes two significant constraints: power and area. To address these constraints, very large scale integration (VLSI) technology offers attractive features in terms of low power and small area; an example is demonstrated by (Indiveri et al. 2006). The objective of this paper is to present a new implementation of the STDP circuit that achieves better power and area figures than previous implementations. Methods: The proposed circuit uses complementary metal oxide semiconductor (CMOS) technology as depicted in Fig. 1. The synaptic weight is stored on a capacitor, and a charging/discharging current leads to potentiation and depression. Results and Conclusion: HSpice simulation results demonstrate that the average power, peak power, and area of the proposed circuit are reduced by 6, 8 and 15%, respectively, in comparison with Indiveri's implementation. These improvements naturally allow more STDP circuits to be packed onto the same substrate than previous proposals. Hence, this new implementation is attractive for real-world large neural networks
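    The capacitor-based weight storage described in the Methods can be modeled to first order as an ideal current source charging or discharging a capacitor, with the voltage clipped at the supply rails. The component values below are illustrative assumptions, not those of the actual circuit.

```python
# First-order model of capacitor weight storage: a current pulse of
# duration dt moves the stored voltage by dV = I * dt / C, clipped to
# the rails. All values are illustrative assumptions.
C = 1e-12  # storage capacitance (F), assumed

def update_weight(v_w, i_pulse, dt, v_min=0.0, v_max=1.8):
    """New capacitor voltage after a charging (i_pulse > 0,
    potentiation) or discharging (i_pulse < 0, depression) pulse."""
    v_new = v_w + i_pulse * dt / C
    return min(max(v_new, v_min), v_max)
```

    Leakage through the access switches, which slowly corrupts the stored weight, is the main non-ideality such circuits must manage and is deliberately omitted here.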

    Distributed Deep Learning in the Cloud and Energy-efficient Real-time Image Processing at the Edge for Fish Segmentation in Underwater Videos

    Using big marine data to train deep learning models is not efficient, or sometimes even possible, on local computers. In this paper, we show how distributed learning in the cloud can help process big data more efficiently and train more accurate deep learning models. In addition, marine big data is usually communicated over wired networks, which, if they are feasible to deploy in the first place, are costly to maintain. Therefore, wireless communication, predominantly carried by acoustic waves in underwater sensor networks, may be considered. However, wireless transmission of big marine data is not feasible due to the narrow frequency bandwidth of acoustic waves and the ambient noise. To address this problem, we propose an optimized deep learning design for low-energy and real-time image processing at the underwater edge. This trades the need to transmit large image data for transmitting only the low-volume results that can be sent over wireless sensor networks. To demonstrate the benefits of our approaches in a real-world application, we perform fish segmentation in underwater videos and draw comparisons against conventional techniques. We show that, when underwater captured images are processed at the collection edge, a 4x speedup can be achieved compared to using a landside server. Furthermore, we demonstrate that deploying a compressed DNN at the edge can save 60% of the power consumed by a full DNN model. These results promise improved applications of affordable deep learning in underwater exploration, monitoring, navigation, tracking, disaster prevention, and scientific data collection projects
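    The arithmetic behind the edge-processing argument is simple: an acoustic link is far too slow for raw frames but comfortably carries segmentation results. The link rate and payload sizes below are illustrative order-of-magnitude assumptions, not measurements from the paper.

```python
# Back-of-the-envelope comparison of transmitting a raw video frame
# versus a compact segmentation result over an acoustic link.
# All numbers are assumed orders of magnitude, not measured values.

def transmit_time_s(payload_bytes, link_bps):
    """Seconds needed to send a payload over a link of given bit rate."""
    return payload_bytes * 8 / link_bps

ACOUSTIC_BPS = 10_000        # ~10 kbit/s, an assumed acoustic modem rate
raw_frame = 1920 * 1080 * 3  # uncompressed 1080p RGB frame, bytes
mask_rle = 4_000             # run-length-encoded fish mask, bytes (assumed)

t_raw = transmit_time_s(raw_frame, ACOUSTIC_BPS)   # over an hour per frame
t_mask = transmit_time_s(mask_rle, ACOUSTIC_BPS)   # a few seconds
```

    With payloads this asymmetric, even an expensive on-device segmentation step pays for itself many times over in transmission time and energy.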

    A novel design for quantum-dot cellular automata cells and full adders

    Quantum-dot Cellular Automata (QCA) is a novel and potentially attractive technology for implementing computing architectures at the nano-scale. The basic Boolean primitive in QCA is the majority gate. In this study we present a novel design for QCA cells and an alternative, unconventional scheme for majority gates. Using these building blocks, the hardware requirements of a QCA design can be reduced and circuits become simpler in terms of logic levels and gate count. As an example, a one-bit QCA adder is constructed by applying our new scheme. In addition, we show how our reduction method decreases gate counts and levels in comparison to previous methods
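    For reference, a standard majority-only full-adder decomposition (a well-known textbook construction, not the paper's reduced scheme) uses three majority gates plus inverters:

```python
# The majority gate M(a, b, c) is QCA's Boolean primitive. A classic
# majority-based full adder is:
#   Cout = M(A, B, Cin)
#   Sum  = M(NOT Cout, Cin, M(A, B, NOT Cin))

def maj(a, b, c):
    """Majority vote of three bits."""
    return 1 if a + b + c >= 2 else 0

def qca_full_adder(a, b, cin):
    """One-bit full adder built only from majority gates and inverters."""
    cout = maj(a, b, cin)
    s = maj(1 - cout, cin, maj(a, b, 1 - cin))
    return s, cout
```

    Reduction methods like the one proposed aim to realize the same truth table with fewer gates and shallower logic levels than this baseline.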

    SAM: a unified self-adaptive multicompartmental spiking neuron model for learning with working memory

    Working memory is a fundamental feature of biological brains for perception, cognition, and learning. In addition, learning with working memory, which has been shown in conventional artificial intelligence systems through recurrent neural networks, is instrumental to advanced cognitive intelligence. However, it is hard to endow a simple neuron model with working memory, and to understand the biological mechanisms that have resulted in such a powerful ability at the neuronal level. This article presents a novel self-adaptive multicompartment spiking neuron model, referred to as SAM, for spike-based learning with working memory. SAM integrates four major biological principles: sparse coding, dendritic non-linearity, intrinsic self-adaptive dynamics, and spike-driven learning. We first describe SAM's design and explore the impacts of critical parameters on its biological dynamics. We then use SAM to build spiking networks to accomplish several different tasks, including supervised learning of the MNIST dataset using sequential spatiotemporal encoding, noisy spike pattern classification, sparse coding during pattern classification, spatiotemporal feature detection, meta-learning with working memory applied to a navigation task and the MNIST classification task, and working memory for spatiotemporal learning. Our experimental results highlight the energy efficiency and robustness of SAM across this wide range of challenging tasks. The effects of SAM model variations on its working memory are also explored, with the aim of offering insight into the biological mechanisms underlying working memory in the brain. The SAM model is the first attempt to integrate the capabilities of spike-driven learning and working memory in a unified single neuron with multiple timescale dynamics. The competitive performance of SAM could contribute to the development of efficient adaptive neuromorphic computing systems for applications ranging from robotics to edge computing
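    One of the ingredients listed above, intrinsic self-adaptive dynamics, can be illustrated with a generic leaky integrate-and-fire neuron whose threshold rises after each spike and then decays slowly, leaving a memory trace of recent activity. This is a simplified sketch, not the SAM model itself; all parameters are assumed.

```python
import math

# Leaky integrate-and-fire neuron with an adaptive threshold: each spike
# raises the effective threshold, which decays back on a slower
# timescale, so the neuron carries a trace of its recent activity.
# Generic sketch with assumed parameters, not the SAM model.
class AdaptiveLIF:
    def __init__(self, tau_m=20.0, tau_th=100.0, v_th0=1.0, beta=0.5):
        self.tau_m, self.tau_th = tau_m, tau_th   # membrane / threshold taus
        self.v_th0, self.beta = v_th0, beta       # base threshold, jump size
        self.v, self.th_adapt = 0.0, 0.0

    def step(self, i_in, dt=1.0):
        """Advance one time step; return True if the neuron spikes."""
        self.v += dt * (-self.v / self.tau_m + i_in)   # leaky integration
        self.th_adapt *= math.exp(-dt / self.tau_th)   # threshold decay
        if self.v >= self.v_th0 + self.th_adapt:
            self.v = 0.0                # reset membrane potential
            self.th_adapt += self.beta  # raise the effective threshold
            return True
        return False
```

    Because the threshold timescale (tau_th) is several times the membrane timescale (tau_m), the neuron's firing rate encodes not just the current input but also how active it has recently been.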

    Empirical metal-oxide RRAM device endurance and retention model for deep learning simulations

    Memristive devices, including resistive random access memory (RRAM) cells, are promising nanoscale low-power components projected to facilitate significant improvements in the power consumption and speed of Deep Learning (DL) accelerators if structured in crossbar architectures. However, these devices possess non-ideal endurance and retention properties, which should be modeled efficiently. In this paper, we propose a novel generalized empirical metal-oxide RRAM endurance and retention model for use in large-scale DL simulations. To the best of our knowledge, the proposed model is the first to unify retention-endurance modeling while taking into account time, energy, SET-RESET cycles, device size, and temperature. We compare the model to the state of the art and demonstrate its versatility by applying it to experimental data from fabricated devices. Furthermore, we use the model for CIFAR-10 dataset classification using a large-scale deep memristive neural network (DMNN) implementing the MobileNetV2 architecture. Our results show that, even when ignoring other device non-idealities, retention and endurance losses significantly affect the performance of DL networks. Our proposed model and its DL simulations are made publicly available
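    The kind of behaviour such a model captures can be sketched with two toy functions: the stored conductance drifting toward mid-range over time (retention loss) and the programmable window closing with SET-RESET cycling (endurance loss). The functional forms and constants are illustrative assumptions, not the paper's fitted model.

```python
import math

# Toy retention and endurance behaviour for a normalized conductance in
# [0, 1]. Both functional forms and all constants are assumptions made
# for illustration only.

def retention(g0, t_s, tau=1e6):
    """Conductance relaxes exponentially toward its mid-range value
    over time (seconds), erasing the programmed state."""
    g_mid = 0.5
    return g_mid + (g0 - g_mid) * math.exp(-t_s / tau)

def endurance_window(cycles, k=1e-7):
    """The programmable window [g_lo, g_hi] closes linearly with
    SET-RESET cycling until the device can no longer be distinguished."""
    shrink = min(0.5, k * cycles)
    return shrink, 1.0 - shrink
```

    In a DL simulation these transformations would be applied to every crossbar weight between inference passes, which is how retention and endurance losses translate into classification accuracy degradation.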

    Pairing frequency experiments in visual cortex reproduced in a neuromorphic STDP circuit

    Previous studies show that the conventional pair-based form of STDP (PSTDP) cannot account for many biological experiments, including frequency-dependent pairing experiments performed in the visual cortex. However, improved synaptic plasticity rules, such as Triplet-based Spike Timing Dependent Plasticity (TSTDP), are capable of replicating the outcomes of many biological experiments, including those carried out in the visual cortex. This paper proposes a programmable analog neuromorphic circuit capable of reproducing pairing frequency experiments in the visual cortex. The circuit utilizes transistors operating in their subthreshold region. In addition, it implements a minimal-model TSTDP learning rule, which requires fewer transistors than its PSTDP circuit counterparts. These features result in low-power, compact circuits that are suitable for large-scale VLSI implementations of Spiking Neural Networks (SNNs) with improved synaptic plasticity and learning capabilities

    Programmable spike-timing-dependent plasticity learning circuits in neuromorphic VLSI architectures

    Hardware implementations of spiking neural networks offer promising solutions for computational tasks that require compact and low-power computing technologies. As these solutions depend on both the specific network architecture and the type of learning algorithm used, it is important to develop spiking neural network devices that offer the possibility to reconfigure their network topology and to implement different types of learning mechanisms. Here we present a neuromorphic multi-neuron VLSI device with on-chip programmable event-based hybrid analog/digital circuits; the event-based nature of the input/output signals allows the use of address-event representation infrastructures for configuring arbitrary network architectures, while the programmable synaptic efficacy circuits allow the implementation of different types of spike-based learning mechanisms. The main contributions of this article are to demonstrate how the proposed programmable neuromorphic system can be configured to implement specific spike-based synaptic plasticity rules, and to show how it can be utilised in a cognitive task. Specifically, we explore the online implementation of different spike-timing plasticity learning rules in a hybrid system comprising a workstation and the neuromorphic VLSI device interfaced to it, and we demonstrate how, after training, the VLSI device can perform binary classification of correlated patterns as a standalone component (i.e., without requiring a computer)
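    The address-event representation mentioned above makes network topology pure data: each spike travels as a (time, source-address) event, and a routing table expands it into per-synapse deliveries. The sketch below is a generic illustration of the idea, not the device's actual routing hardware.

```python
# Address-event representation (AER): reconfiguring the network means
# rewriting a routing table, not rewiring hardware. The table and
# weights below are arbitrary illustrative values.

routing_table = {            # source neuron -> list of (dest, weight)
    0: [(2, 0.5), (3, 0.25)],
    1: [(3, 1.0)],
}

def route_events(events):
    """Expand source spike events (time, src) into per-synapse
    delivery events (time, dest, weight)."""
    out = []
    for t, src in events:
        for dest, w in routing_table.get(src, []):
            out.append((t, dest, w))
    return out
```

    Because every spike is just an address on a shared bus, the same physical chip can realize arbitrary feed-forward or recurrent topologies by loading a different table.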